Non-invasive prostate cancer detection from MRI has the potential to revolutionize patient care by providing early detection of clinically significant disease (ISUP grade group >= 2), but has thus far shown limited positive predictive value. To address this, we present an MRI-based deep learning method for predicting clinically significant prostate cancer applicable to a patient population with subsequent ground truth biopsy results ranging from benign pathology to ISUP grade group 5. Specifically, we demonstrate that mixed supervision via diverse histopathological ground truth improves classification performance despite the cost of reduced concordance with image-based segmentation. That is, where prior approaches have utilized pathology results as ground truth derived from targeted biopsies and whole-mount prostatectomy to strongly supervise the localization of clinically significant cancer, our approach also utilizes weak supervision signals extracted from nontargeted systematic biopsies with regional localization to improve overall performance. Our key innovation is performing regression by distribution rather than simply by value, enabling use of additional pathology findings traditionally ignored by deep learning strategies. We evaluated our model on a dataset of 973 (testing n=160) multi-parametric prostate MRI exams collected at UCSF from 2015-2018, followed by MRI/ultrasound fusion (targeted) biopsy and systematic (nontargeted) biopsy of the prostate gland, demonstrating that deep networks trained with mixed supervision of histopathology can significantly exceed the performance of the Prostate Imaging-Reporting and Data System (PI-RADS) clinical standard for prostate MRI interpretation.
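The abstract's key idea, regression by distribution rather than by value, can be illustrated with a minimal NumPy sketch. The encoding below, which softens an ISUP grade group label into a distribution whose sharpness reflects how localized the biopsy evidence is, is a hypothetical illustration of the concept; the paper's exact formulation may differ.

```python
import numpy as np

def distribution_target(grade, sharpness):
    """Encode an ISUP grade group (0 = benign .. 5) as a soft target
    distribution over grades. A targeted biopsy might yield a sharp target,
    a nontargeted systematic biopsy a broad one (hypothetical encoding)."""
    grades = np.arange(6, dtype=float)
    logits = -sharpness * (grades - grade) ** 2
    p = np.exp(logits - logits.max())
    return p / p.sum()

def kl_loss(pred_probs, target_probs, eps=1e-12):
    """KL(target || prediction): the model regresses a whole distribution
    over grades, not a single scalar grade."""
    t = target_probs
    return float(np.sum(t * (np.log(t + eps) - np.log(pred_probs + eps))))

uniform = np.full(6, 1 / 6)  # an untrained model's prediction
strong = kl_loss(uniform, distribution_target(3, sharpness=8.0))  # targeted
weak = kl_loss(uniform, distribution_target(3, sharpness=0.5))    # systematic
```

Against a uniform prediction, the sharp (strongly supervised) target incurs a larger loss than the broad (weakly supervised) one, so both kinds of pathology findings contribute gradient signal in proportion to their specificity.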
Training populations of agents has shown great promise in reinforcement learning for stabilizing training, improving exploration and asymptotic performance, and generating diverse solutions. However, population-based training is often not considered by practitioners because it is perceived as either prohibitively slow (when implemented sequentially) or computationally expensive (if agents are trained in parallel on independent accelerators). In this work, we compare implementations and revisit previous studies to show that judicious use of compilation and vectorization allows population-based training to be performed on a single machine at little cost compared to training a single agent. We also show that, when a few accelerators are available, our protocols scale to large population sizes for applications such as hyperparameter tuning. We hope that this work and the public release of our code will encourage practitioners to use population-based learning more frequently for their research and applications.
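The contrast between a sequential loop over agents and a single vectorized update can be sketched in a toy NumPy example. The quadratic "loss" and per-agent learning rates below are illustrative stand-ins; the paper's implementation targets real RL updates compiled for accelerators.

```python
import numpy as np

rng = np.random.default_rng(0)
POP, DIM = 32, 8                        # population size, parameter dimension
params = rng.normal(size=(POP, DIM))    # one parameter vector per agent
lrs = 10 ** rng.uniform(-3, -1, POP)    # per-agent hyperparameter (learning rate)

def loss_grad(p):
    """Toy quadratic loss gradient; stands in for a per-agent RL update."""
    return 2.0 * p

for _ in range(100):
    # One batched array operation updates the whole population at once,
    # instead of looping over agents sequentially or spreading them
    # across independent accelerators.
    params -= lrs[:, None] * loss_grad(params)

losses = (params ** 2).sum(axis=1)
best = int(np.argmin(losses))  # e.g. select the best hyperparameter setting
```

The same pattern, with the update compiled and vectorized (e.g. via JAX's `jit` and `vmap`), is what makes single-machine population-based training competitive with training one agent.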
In this paper, we review physics- and data-driven reconstruction techniques for simultaneous positron emission tomography (PET) / magnetic resonance imaging (MRI) systems, which offer significant advantages for clinical imaging of cancer, neurological disorders, and heart disease. These reconstruction approaches utilize structural or statistical priors, together with a physics-based description of the PET system response. However, owing to the nested representation of the forward problem, direct PET/MRI reconstruction is a nonlinear problem. We elucidate how a multifaceted approach accommodates hybrid data- and physics-driven machine learning for 3D PET/MRI reconstruction, summarizing important deep learning developments of the past five years that address attenuation correction, scattering, low photon counts, and data consistency. We also describe how applications of these multi-modality approaches extend beyond PET/MRI to improving the accuracy of radiation therapy planning. Finally, we discuss opportunities to extend the current state of the art following the latest trends in physics- and deep learning-based computational imaging and next-generation detector hardware.
Historically, patient datasets have been used to develop and validate various reconstruction algorithms for PET/MRI and PET/CT. To enable such algorithm development without the need to acquire hundreds of patient exams, in this paper we demonstrate a deep learning technique that generates synthetic but realistic whole-body PET sinograms from abundantly available whole-body MRI. Specifically, we use a dataset of 56 $^{18}$F-FDG PET/MRI exams to train a 3D residual UNet to predict physiologic PET uptake from whole-body T1-weighted MRI. In training, we implemented a balanced loss function to generate realistic uptake across a large dynamic range, and computed losses along tomographic lines of response to mimic the PET acquisition. The predicted PET images are forward projected to produce synthetic PET time-of-flight (TOF) sinograms that can be used with vendor-provided PET reconstruction algorithms, including CT-based attenuation correction (CTAC) and MR-based attenuation correction (MRAC). The resulting synthetic data recapitulate physiologic $^{18}$F-FDG uptake, e.g., high uptake localized to the brain and bladder, as well as uptake in the liver, kidneys, heart, and muscle. To simulate abnormalities with high uptake, we also inserted synthetic lesions. We demonstrate that this synthetic PET data can be used interchangeably with real PET data for the PET quantification task of comparing CT- and MR-based attenuation correction methods, achieving $\leq 7.6\%$ error in the mean compared with using real data. Together, these results show that the proposed synthetic PET data pipeline can reasonably be used for the development, evaluation, and validation of PET/MRI reconstruction methods.
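The two training-loss ideas named in the abstract, balancing a large dynamic range of uptake and computing losses along tomographic lines of response, can be sketched with a toy 2D example. The row/column projections and the inverse-intensity weighting below are illustrative assumptions, not the paper's actual system model or weighting scheme.

```python
import numpy as np

def forward_project(img):
    """Toy tomographic projection: line integrals along rows and columns
    stand in for PET lines of response (a real system model would use many
    projection angles and TOF bins)."""
    return np.concatenate([img.sum(axis=0), img.sum(axis=1)])

def balanced_projection_loss(pred, target, eps=1e-3):
    """Image-domain error reweighted toward low-uptake voxels (balancing the
    dynamic range) plus an error on projections, mimicking losses computed
    along lines of response. Both terms are hypothetical illustrations."""
    w = 1.0 / (target + eps)              # assumed dynamic-range balancing
    image_term = np.mean(w * (pred - target) ** 2)
    proj_term = np.mean((forward_project(pred) - forward_project(target)) ** 2)
    return image_term + proj_term
```

The projection term penalizes errors that would distort the simulated sinogram even when the voxelwise error is small, which is the motivation for supervising along lines of response.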
A fundamental component of human vision is our ability to parse complex visual scenes and judge the relations between their constituent objects. AI benchmarks for visual reasoning have driven rapid progress in recent years, with state-of-the-art systems now attaining human accuracy on some of these benchmarks. Yet, a significant gap remains between humans and AI systems in the sample efficiency with which they learn new visual reasoning tasks. Humans' remarkable efficiency at learning has been attributed, at least in part, to their ability to harness compositionality, so that they can efficiently take advantage of previously acquired knowledge when learning new tasks. Here, we introduce a novel visual reasoning benchmark, Compositional Visual Relations (CVR), to drive progress toward the development of more data-efficient learning algorithms. We take inspiration from fluid intelligence and non-verbal reasoning tests and describe a novel method for creating compositions of abstract rules and associated image datasets. Our proposed benchmark includes measures of sample efficiency, generalization, and transfer across task rules, as well as the ability to leverage compositionality. We systematically evaluate modern neural architectures and find, surprisingly, that convolutional architectures surpass transformer-based architectures on all performance measures in most data regimes. However, all computational models remain far less data-efficient than humans, even after learning informative visual representations with self-supervision. Overall, we hope our challenge will spur interest in developing neural architectures that can learn to harness compositionality toward more efficient learning.
Despite the efficiency and scalability of machine learning systems, recent studies have demonstrated that many classification methods, especially deep neural networks (DNNs), are vulnerable to adversarial examples: examples carefully crafted to fool a well-trained classification model while remaining indistinguishable from natural data to humans. This makes it potentially unsafe to apply DNNs or related methods in security-critical areas. Since this issue was identified by Biggio et al. (2013) and Szegedy et al. (2014), much work has been done in this field, including the development of attack methods for generating adversarial examples and the construction of defense techniques to guard against them. This paper aims to introduce this topic and its latest developments to the statistics community, focusing primarily on the generation of, and defense against, adversarial examples. Computing code (in Python and R) used in the numerical experiments is publicly available for readers to explore the surveyed methods. It is the authors' hope that this paper will encourage more statisticians to work on this important and exciting field of generating and defending against adversarial examples.
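The canonical attack the survey covers can be shown in a few lines. Below is a minimal NumPy sketch of the fast gradient sign method (FGSM) against a toy logistic-regression classifier; the weights and input are made-up illustrative values, not from the paper's experiments.

```python
import numpy as np

def fgsm_attack(x, grad, eps=0.1):
    """Fast gradient sign method: perturb the input in the direction of the
    sign of the loss gradient, bounded by eps in the L-infinity norm."""
    return x + eps * np.sign(grad)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy classifier and a correctly classified positive example.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.2, -0.3]), 1

p = sigmoid(w @ x + b)          # model confidence for class 1 (> 0.5)
grad_x = (p - y) * w            # gradient of cross-entropy loss w.r.t. x
x_adv = fgsm_attack(x, grad_x, eps=0.4)
p_adv = sigmoid(w @ x_adv + b)  # confidence collapses: the point is flipped
```

A small, bounded perturbation is enough to flip the prediction, which is exactly the fragility the surveyed attack and defense literature is concerned with.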
Edge computing is changing the face of many industries and services. Common edge computing models offload computation to the edge, which is prone to security risks and privacy violations. However, advances in deep learning have enabled Internet of Things (IoT) devices to make decisions and run cognitive tasks locally. This research introduces a decentralized-control edge model in which most computation and decisions are moved to the IoT level. The model aims to decrease communication to the edge, which in turn improves efficiency and decreases latency. The model also avoids data transfer, which raises security and privacy risks. To examine the model, we developed SAFEMYRIDES, a scene-aware ridesharing monitoring system in which smartphones detect violations at runtime. Current real-time monitoring systems are costly and require continuous network connectivity. The system uses optimized deep learning models that run locally on IoT devices to detect violations in ridesharing and record violation incidents. The system would enhance safety and security in ridesharing without violating privacy.
Cognitive Computing (COC) aims to build highly cognitive machines with low computational resources that respond in real time. However, the scholarly literature shows varying research areas and various interpretations of COC. This calls for a cohesive architecture that delineates the nature of COC. We argue that if Herbert Simon considered design science the science of the artificial, then cognitive systems are the products of cognitive science, or 'the newest science of the artificial'. Therefore, building a conceptual basis for COC is an essential step toward prospective cognitive computing-based systems. This paper proposes an architecture of COC by analyzing the literature on COC using a myriad of statistical analysis methods. We then compare the statistical analysis results with previous qualitative analysis results to confirm our findings. The study also comprehensively surveys recent research on COC to identify the state of the art and connect advances across the varied research disciplines within COC. The study found that there are three underlying computing paradigms, von Neumann, neuromorphic engineering, and quantum computing, that comprehensively complement the structure of cognitive computation. The research discusses possible applications and open research directions under the COC umbrella.
When testing conditions differ from those represented in training data, so-called out-of-distribution (OOD) inputs can mar the reliability of black-box learned components in the modern robot autonomy stack. Therefore, coping with OOD data is an important challenge on the path towards trustworthy learning-enabled open-world autonomy. In this paper, we aim to demystify the topic of OOD data and its associated challenges in the context of data-driven robotic systems, drawing connections to emerging paradigms in the ML community that study the effect of OOD data on learned models in isolation. We argue that as roboticists, we should reason about the overall system-level competence of a robot as it performs tasks in OOD conditions. We highlight key research questions around this system-level view of OOD problems to guide future research toward safe and reliable learning-enabled autonomy.
Rigorous guarantees about the performance of predictive algorithms are necessary in order to ensure their responsible use. Previous work has largely focused on bounding the expected loss of a predictor, but this is not sufficient in many risk-sensitive applications where the distribution of errors is important. In this work, we propose a flexible framework to produce a family of bounds on quantiles of the loss distribution incurred by a predictor. Our method takes advantage of the order statistics of the observed loss values rather than relying on the sample mean alone. We show that a quantile is an informative way of quantifying predictive performance, and that our framework applies to a variety of quantile-based metrics, each targeting important subsets of the data distribution. We analyze the theoretical properties of our proposed method and demonstrate its ability to rigorously control loss quantiles on several real-world datasets.
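A classical special case of bounding a loss quantile with order statistics can be sketched directly. The distribution-free construction below, which returns the smallest order statistic that upper-bounds the beta-quantile with confidence 1 - delta via a binomial tail, is a standard textbook result used here as an illustration; the paper's framework is more general.

```python
import math

def quantile_upper_bound(losses, beta, delta):
    """Distribution-free upper confidence bound on the beta-quantile of the
    loss: return the smallest order statistic L_(k) such that, for i.i.d.
    continuous losses, P(Q_beta <= L_(k)) >= 1 - delta.
    Returns None if the sample is too small for the requested confidence."""
    n = len(losses)
    srt = sorted(losses)
    for k in range(1, n + 1):
        # P(L_(k) < Q_beta) <= P(Binomial(n, beta) >= k); require <= delta.
        tail = sum(math.comb(n, j) * beta**j * (1 - beta)**(n - j)
                   for j in range(k, n + 1))
        if tail <= delta:
            return srt[k - 1]
    return None
```

For example, with 100 observed losses, the bound on the median (beta = 0.5) at 95% confidence is an upper order statistic somewhat above the empirical median, reflecting the finite-sample guarantee rather than a point estimate.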